The last several years have witnessed remarkable progress in video-and-language (VidL) understanding. However, most modern VidL approaches use complex and specialized model architectures and sophisticated pretraining protocols, making the reproducibility, analysis, and comparison of these frameworks difficult. Hence, instead of proposing yet another new VidL model, this paper conducts a thorough empirical study demystifying the most important factors in VidL model design. Among the factors we investigate are (i) the spatiotemporal architecture design, (ii) the multimodal fusion schemes, (iii) the pretraining objectives, (iv) the choice of pretraining data, (v) pretraining and finetuning protocols, and (vi) dataset and model scaling. Our empirical study reveals that the most important design factors include: temporal modeling, video-to-text multimodal fusion, masked modeling objectives, and joint training on images and videos. Using these empirical insights, we then develop a step-by-step recipe, dubbed VindLU, for effective VidL pretraining. Our final model trained using this recipe achieves results comparable to or better than the state of the art on several VidL tasks, without relying on external CLIP pretraining. In particular, on the text-to-video retrieval task, our approach obtains 61.2% on DiDeMo and 55.0% on ActivityNet, outperforming the current SOTA by 7.8% and 6.1%, respectively. Furthermore, our model also obtains state-of-the-art video question-answering results on ActivityNet-QA, MSRVTT-QA, MSRVTT-MC, and TVQA. Our code and pretrained models are publicly available at: https://github.com/klauscc/VindLU.
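For concreteness, the sketch below illustrates one common form of the video-to-text multimodal fusion that the study identifies as a key design factor: cross-attention layers inserted into a text encoder that attend over spatiotemporal video tokens. This is a minimal, hedged PyTorch sketch, not the actual VindLU implementation; the module name, dimensions, and shapes are assumptions.

```python
# Illustrative sketch (not the actual VindLU code): a text-encoder layer with an
# added cross-attention step that lets text tokens attend over video tokens.
import torch
import torch.nn as nn

class VideoToTextFusionLayer(nn.Module):
    def __init__(self, dim=768, num_heads=12):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1, self.norm2, self.norm3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, text_tokens, video_tokens):
        # Text self-attention.
        x = text_tokens
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h)[0]
        # Video-to-text fusion: text queries attend over spatiotemporal video tokens.
        x = x + self.cross_attn(self.norm2(x), video_tokens, video_tokens)[0]
        # Feed-forward.
        x = x + self.mlp(self.norm3(x))
        return x

# Example shapes: batch of 2, 32 text tokens, 8 frames x 196 patches of video tokens.
text = torch.randn(2, 32, 768)
video = torch.randn(2, 8 * 196, 768)
fused = VideoToTextFusionLayer()(text, video)   # (2, 32, 768)
```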
The success of deep learning has led to rapid transformation and growth in many areas of computer science, including computer vision. In this work, we study the effects of this growth through computer vision research papers themselves, analyzing the figures and tables in these papers from a media archaeology perspective. We conduct our investigation through interviews with senior researchers spanning computer vision, graphics, and visualization, and through a computational analysis of a decade of vision conference papers. Our analysis focuses on the elements that play a role in advertising, measuring, and disseminating increasingly commodified "contributions." We argue that each of these elements is both shaped by and shapes the climate of computer vision, ultimately contributing to this commodification. Through this work, we aim to motivate future discussion about the design of research papers and the broader sociotechnical publishing system.
Machine learning models for visual action recognition are typically trained and tested on data from specific situations in which actions are associated with particular objects. It is an open question how the action-object associations in the training set affect a model's ability to generalize beyond the situations it was trained on. We set out to identify the properties of training data that lead to action recognition models with greater generalization ability. To do this, we draw inspiration from a cognitive mechanism called cross-situational learning, which states that human learners extract the meaning of a concept by observing instances of that concept across different situations. We perform controlled experiments with various types of action-object associations and identify key properties of action-object co-occurrence in training data that lead to better classifiers. Given that these properties are often missing from the datasets commonly used to train action classifiers in the computer vision literature, our work provides useful insights on how to best construct datasets for effectively training for better generalization.
Previous work on LiDAR-based 3D object detection has mainly focused on the single-frame paradigm. In this paper, we propose to detect 3D objects by exploiting the temporal information across multiple frames, i.e., point cloud videos. We empirically categorize the temporal information into short-term and long-term patterns. To encode the short-term data, we propose a Grid Message Passing Network (GMPNet), which treats each grid (i.e., a group of points) as a node and constructs a k-NN graph with its neighboring grids. To update the features of a grid, GMPNet iteratively collects information from its neighbors, thus mining motion cues from nearby frames. To further aggregate long-term frames, we propose an Attentive Spatiotemporal Transformer GRU (AST-GRU), which contains a Spatial Transformer Attention (STA) module and a Temporal Transformer Attention (TTA) module. STA and TTA enhance the vanilla GRU to focus on small objects and better align moving objects. Our overall framework supports both online and offline video object detection on point clouds. We implement our algorithm on top of prevalent anchor-based and anchor-free detectors. Evaluation results on the challenging nuScenes benchmark show the superior performance of our method, which ranked first on the leaderboard at the time of submission, without any bells and whistles.
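To make the short-term encoding concrete, here is a minimal, hypothetical sketch of the grid message passing idea: each non-empty grid is a node, neighbors are found by k-NN over grid centers, and node features are iteratively updated by aggregating messages from those neighbors. It is an illustrative PyTorch sketch under assumed shapes and layer sizes, not the authors' GMPNet implementation.

```python
# Hypothetical sketch of k-NN grid message passing over BEV grid features.
import torch
import torch.nn as nn

class GridMessagePassing(nn.Module):
    def __init__(self, dim=64, k=8, num_iters=3):
        super().__init__()
        self.k, self.num_iters = k, num_iters
        self.msg_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, feats, centers):
        # feats: (N, dim) grid features; centers: (N, 2) BEV grid centers.
        dists = torch.cdist(centers, centers)                            # (N, N)
        knn_idx = dists.topk(self.k + 1, largest=False).indices[:, 1:]   # drop self
        for _ in range(self.num_iters):
            neighbor_feats = feats[knn_idx]                              # (N, k, dim)
            center_feats = feats.unsqueeze(1).expand_as(neighbor_feats)
            messages = self.msg_mlp(torch.cat([center_feats, neighbor_feats], dim=-1))
            feats = feats + messages.max(dim=1).values                   # max-pool aggregation
        return feats

# Example: 500 non-empty grids with 64-dim features on a 100m x 100m BEV map.
feats = torch.randn(500, 64)
centers = torch.rand(500, 2) * 100.0
updated = GridMessagePassing()(feats, centers)                           # (500, 64)
```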
The performance of a network compressed via distillation is governed by the quality of the distillation. The reason for sub-optimal distillation from a large network (teacher) to a smaller network (student) is largely attributed to the gap in learning capacity between the given teacher-student pair. While it is hard to distill all of the teacher's knowledge, the quality of distillation can be controlled to a large extent to achieve better performance. Our experiments show that distillation quality is mainly constrained by the quality of the teacher's responses, which in turn is affected by the presence of similarity information in those responses. A well-trained, high-capacity teacher loses similarity information between classes in the process of learning fine-grained discriminative properties. The absence of similarity information reduces the distillation process from one-example-to-many-classes learning to one-example-to-one-class learning, thereby throttling the flow of the teacher's diverse knowledge. Under the implicit assumption that only what has been learned can be distilled, we scrutinize the knowledge acquisition process rather than focusing only on the knowledge distillation process. We argue that, for a given teacher-student pair, distillation quality can be improved by finding the sweet spot between batch size and number of epochs while training the teacher. We discuss the steps to find this sweet spot for better distillation. We also propose a distillation hypothesis to distinguish whether the distillation process behaves as knowledge distillation or as a regularization effect. We conduct all of our experiments on three different datasets.
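For reference, the standard soft-label distillation loss (Hinton-style) below shows where the teacher's response quality enters the picture: the usefulness of the teacher term depends on how much inter-class similarity survives in the softened teacher outputs, which the abstract ties to the batch-size/epoch sweet spot used when training the teacher. This is a generic formulation, not the paper's specific contribution; the temperature and weighting values are placeholders.

```python
# Generic soft-label knowledge distillation loss; T and alpha are placeholders.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Softened teacher probabilities carry the inter-class similarity structure.
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    kd = F.kl_div(soft_student, soft_targets, reduction="batchmean") * (T * T)
    # Hard-label term on the ground-truth classes.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```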
Video segmentation, i.e., grouping video frames into multiple segments or objects, plays a critical role in a wide range of practical applications, such as visual effect assistance in movies, scene understanding in autonomous driving, and virtual background creation in video conferencing, to name a few. Recently, with the renaissance of connectionism in computer vision, there has been a surge of deep learning based approaches dedicated to video segmentation that deliver compelling performance. In this survey, we comprehensively review the two basic lines of research in this area, namely generic object segmentation (of unknown categories) in videos and video semantic segmentation, by introducing their respective task settings, background concepts, perceived need, development history, and main challenges. We also provide a detailed overview of representative literature on both methods and datasets. Furthermore, we present quantitative performance comparisons of the reviewed methods on benchmark datasets. Finally, we point out a set of unsolved open issues in this field and suggest possible opportunities for further research.
Segmentation models have been found to be vulnerable to both targeted and non-targeted adversarial attacks. However, the resulting segmentation outputs are often so corrupted that the attack is easy to detect. In this paper, we propose semantically stealthy adversarial attacks that can manipulate targeted labels while preserving non-targeted labels at the same time. One challenge is to make semantically meaningful manipulations across datasets and models. Another challenge is to avoid damaging non-targeted labels. To address these challenges, we treat each input image as prior knowledge for generating perturbations. We also design a special regularizer to help extract features. To evaluate the performance of our model, we design three basic attack types, namely "vanishing into the context," "embedding fake labels," and "displacing target objects." Our experiments show that our stealthy adversarial model can attack segmentation models with relatively high success rates on Cityscapes, Mapillary, and BDD100K. Our framework shows good empirical generalization across datasets and models.
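As a rough illustration of the "stealthy" objective described above, the following hypothetical loss pushes pixels inside a target mask toward attacker-chosen labels while explicitly penalizing changes to all other pixels, so the non-targeted part of the segmentation stays intact. This is a generic sketch, not the authors' model; the tensor shapes and regularization weight are assumptions.

```python
# Generic targeted-yet-stealthy segmentation attack loss (illustrative only).
import torch
import torch.nn.functional as F

def stealthy_attack_loss(logits, orig_pred, target_labels, target_mask, lam=1.0):
    # logits: (B, C, H, W) from the segmentation model on the perturbed image.
    # orig_pred: (B, H, W) labels predicted on the clean image.
    # target_labels: (B, H, W) attacker-chosen labels; target_mask: (B, H, W) bool.
    per_pixel = F.cross_entropy(logits, target_labels, reduction="none")
    attack_term = per_pixel[target_mask].mean()          # drive targeted pixels to new labels
    preserve = F.cross_entropy(logits, orig_pred, reduction="none")
    preserve_term = preserve[~target_mask].mean()        # keep all other pixels unchanged
    return attack_term + lam * preserve_term
```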
In this paper, we propose a novel technique, namely INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR reasons about program semantics via program invariants, while it also captures program syntax via language semantics learned from a large code corpus using a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. Then, INVALIDATOR determines that an APR-generated patch overfits if: (1) it violates correct specifications or (2) it maintains error behaviors of the original buggy program. In case our approach fails to determine an overfitting patch based on invariants, INVALIDATOR utilizes a model trained on labeled patches to assess patch correctness based on program syntax. The benefits of INVALIDATOR are three-fold. First, INVALIDATOR is able to leverage both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require new test cases to be generated; instead, it relies only on the current test suite and uses invariant inference to generalize the behaviors of a program. Third, INVALIDATOR is fully automated. We have conducted our experiments on a dataset of 885 patches generated on real-world programs in Defects4J. Experimental results show that INVALIDATOR correctly classifies 79% of overfitting patches, detecting 23% more overfitting patches than the best baseline. INVALIDATOR also substantially outperforms the best baselines by 14% and 19% in terms of Accuracy and F-Measure, respectively.
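The two-stage decision rule described above can be restated schematically as follows, with inferred likely invariants represented abstractly as sets of strings and the syntax-based model as a callable. This is a hypothetical sketch of the logic, not the actual INVALIDATOR implementation; all names and inputs are assumptions.

```python
# Schematic restatement of the overfitting decision rule (illustrative only).
from typing import Set, Callable

def is_overfitting(patched_correct_invs: Set[str],
                   oracle_correct_invs: Set[str],
                   patched_error_invs: Set[str],
                   buggy_error_invs: Set[str],
                   syntax_classifier: Callable[[], bool]) -> bool:
    # (1) The patch violates correct specifications: some invariant that must
    #     hold in correct executions is not maintained by the patched program.
    violates_correct_spec = not oracle_correct_invs.issubset(patched_correct_invs)
    # (2) The patch maintains error behaviors of the original buggy program.
    keeps_error_behavior = len(patched_error_invs & buggy_error_invs) > 0
    if violates_correct_spec or keeps_error_behavior:
        return True
    # Otherwise fall back to the learned syntax-based classifier.
    return syntax_classifier()
```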
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
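As background for the collapse phenomenon, the illustrative sketch below shows a standard per-dimension KL diagnostic for a Gaussian-posterior VAE: dimensions whose KL to the standard normal prior is near zero carry no information about the input and are effectively collapsed. This is a generic diagnostic, not the paper's latent-identifiable construction; the tolerance and shapes are assumptions.

```python
# Generic posterior-collapse diagnostic for a Gaussian-posterior VAE.
import torch

def kl_per_dimension(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    # KL( N(mu, sigma^2) || N(0, 1) ) per latent dimension, averaged over the batch:
    # 0.5 * (mu^2 + sigma^2 - log sigma^2 - 1).
    return 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).mean(dim=0)

def collapsed_dims(mu, logvar, tol=1e-2):
    # Dimensions whose average KL falls below tol are effectively collapsed.
    return (kl_per_dimension(mu, logvar) < tol).nonzero(as_tuple=True)[0]

# Example: encoder outputs for a batch of 128 inputs with a 16-dim latent space.
mu, logvar = torch.zeros(128, 16), torch.zeros(128, 16)   # posterior equals the prior
print(collapsed_dims(mu, logvar))                          # all 16 dimensions flagged
```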